Convergence Rate of Expectation-Maximization

Authors

  • Raunak Kumar
  • Mark Schmidt
Abstract

Expectation-maximization (EM) is an iterative algorithm for finding the maximum likelihood or maximum a posteriori estimate of the parameters of a statistical model with latent variables or when we have missing data. In this work, we view EM in a generalized surrogate optimization framework and analyze its convergence rate under commonly-used assumptions. We show a lower bound on the decrease in the objective function value on each iteration, and use it to provide the first convergence rate for non-convex functions in the generalized surrogate optimization framework and, consequently, for the EM algorithm. We also discuss how to improve EM by using ideas from optimization.
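The surrogate-optimization view described above can be made concrete with a minimal sketch: each EM iteration builds a lower bound on the log-likelihood that touches it at the current parameters, then maximizes that bound, so the objective never decreases. The example below is an illustrative assumption, not the paper's setting: EM for a mixture of two biased coins, where the latent variable is which coin produced each sequence of flips. The data and initialization are made up for demonstration.

```python
import math

# Illustrative data: (heads, tails) counts for five sequences of flips,
# each produced by one of two coins with unknown biases pA, pB.
data = [(9, 1), (4, 6), (8, 2), (3, 7), (7, 3)]

def log_lik(pA, pB):
    """Marginal log-likelihood under equal mixing weights."""
    ll = 0.0
    for h, t in data:
        la = h * math.log(pA) + t * math.log(1 - pA)
        lb = h * math.log(pB) + t * math.log(1 - pB)
        ll += math.log(0.5 * math.exp(la) + 0.5 * math.exp(lb))
    return ll

pA, pB = 0.6, 0.5  # arbitrary initialization
lls = []
for _ in range(20):
    lls.append(log_lik(pA, pB))
    # E-step: responsibility of coin A for each sequence under current params
    hA = tA = hB = tB = 0.0
    for h, t in data:
        wa = pA**h * (1 - pA)**t
        wb = pB**h * (1 - pB)**t
        rA = wa / (wa + wb)
        hA += rA * h; tA += rA * t
        hB += (1 - rA) * h; tB += (1 - rA) * t
    # M-step: maximize the surrogate, i.e. a responsibility-weighted MLE
    pA = hA / (hA + tA)
    pB = hB / (hB + tB)
```

Because the M-step maximizes a bound that is tight at the current iterate, the recorded log-likelihoods `lls` form a non-decreasing sequence; the paper's contribution is quantifying *how fast* this ascent proceeds.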


Similar Articles

Online Expectation Maximization based algorithms for inference in hidden Markov models

The Expectation Maximization (EM) algorithm is a versatile tool for model parameter estimation in latent data models. When processing large data sets or data streams, however, EM becomes intractable, since it requires the whole data set to be available at each iteration of the algorithm. In this contribution, a new generic online EM algorithm for model parameter inference in general Hidden Markov ...


Ten Steps of EM Suffice for Mixtures of Two Gaussians

We provide global convergence guarantees for the expectation-maximization (EM) algorithm applied to mixtures of two Gaussians with known covariance matrices. We show that EM converges geometrically to the correct mean vectors, and provide simple, closed-form expressions for the convergence rate. As a simple illustration, we show that in one dimension ten steps of the EM algorithm initialized at...
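The setting of the abstract above can be sketched in a few lines: EM for a one-dimensional mixture of two unit-variance Gaussians with equal weights, estimating only the two means. The synthetic data, true means, and initialization below are illustrative assumptions; the sketch simply shows the E-step (posterior responsibilities) and M-step (responsibility-weighted means) run for ten iterations, in the spirit of the paper's title.

```python
import math
import random

# Illustrative synthetic data: 400 points drawn alternately from
# N(-2, 1) and N(2, 1); the true means are assumed for demonstration.
random.seed(0)
MU_TRUE = (-2.0, 2.0)
xs = [random.gauss(MU_TRUE[i % 2], 1.0) for i in range(400)]

def em_step(m0, m1):
    """One EM iteration for a 50/50 mixture of unit-variance Gaussians."""
    s0 = s1 = w0 = w1 = 0.0
    for x in xs:
        # E-step: posterior probability the point came from component 0
        p0 = math.exp(-0.5 * (x - m0) ** 2)
        p1 = math.exp(-0.5 * (x - m1) ** 2)
        r0 = p0 / (p0 + p1)
        s0 += r0 * x; w0 += r0
        s1 += (1 - r0) * x; w1 += (1 - r0)
    # M-step: responsibility-weighted means
    return s0 / w0, s1 / w1

m0, m1 = -0.5, 0.5  # initialization straddling the midpoint
for _ in range(10):
    m0, m1 = em_step(m0, m1)
```

With well-separated components, the iterates contract geometrically toward the sample means, which is the behavior the closed-form rate in the abstract characterizes.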


On Convergence of the EM-ML Algorithm for PET Reconstruction

The EM-ML (expectation-maximization, maximum-likelihood) algorithm for PET reconstruction is an iterative method. Sequence convergence to a fixed point that satisfies the Karush-Kuhn-Tucker conditions for optimality has previously been established [1, 2, 3]. This correspondence first gives an alternative proof of sequence convergence and optimality based on direct expansion of certain Kullback ...


On maximizing the convergence rate for linear systems with input saturation

In this note, we consider a few important issues related to the maximization of the convergence rate inside a given ellipsoid for linear systems with input saturation. For continuous-time systems, the control that maximizes the convergence rate is simply a bang–bang control. Through studying the system under the maximal convergence control, we reveal several fundamental results on set invarianc...


Optimization with EM and Expectation-Conjugate-Gradient

We show a close relationship between the Expectation Maximization (EM) algorithm and direct optimization algorithms such as gradient-based methods for parameter learning. We identify analytic conditions under which EM exhibits Newton-like behavior, and conditions under which it possesses poor, first-order convergence. Based on this analysis, we propose two novel algorithms for maximum likelihood...




Publication date: 2017